Masked image modeling (MIM) has shown great promise for self-supervised learning (SSL), yet it has been criticized for learning inefficiency. We believe insufficient utilization of training signals is responsible. To alleviate this issue, we introduce a conceptually simple yet learning-efficient MIM training scheme, termed Disjoint Masking with Joint Distillation (DMJD). For disjoint masking (DM), we sequentially sample multiple masked views per image in a mini-batch under a disjoint regulation, raising the usage of tokens for reconstruction in each image while keeping the masking rate of each view. For joint distillation (JD), we adopt a dual-branch architecture to predict invisible (masked) and visible (unmasked) tokens, respectively, with superior learning targets. Rooted in orthogonal perspectives on training efficiency, DM and JD cooperatively accelerate training convergence without sacrificing the model's generalization ability. Concretely, DM can train ViT with half the effective training epochs (3.7 times less time-consuming) while reporting competitive performance. With JD, our DMJD clearly improves the linear probing classification accuracy over ConvMAE by 5.8%. On fine-grained downstream tasks such as semantic segmentation and object detection, our DMJD also presents superior generalization compared with state-of-the-art SSL methods. The code and models will be made public at https://github.com/mx-mark/DMJD.
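The disjoint regulation in DM can be illustrated with a minimal sketch (function names are ours, not the authors' code). Under one plausible reading, the visible sets of the sampled views are pairwise disjoint, so across views every token contributes a training signal while each view keeps the nominal masking rate:

```python
import random

def disjoint_masked_views(num_tokens, mask_ratio, num_views, seed=0):
    """Sample masked views whose *visible* sets are pairwise disjoint,
    so across views every token is used, while each view keeps the
    nominal masking rate. A sketch, not the paper's implementation."""
    n_visible = num_tokens - int(num_tokens * mask_ratio)
    assert num_views * n_visible <= num_tokens, "views cannot stay disjoint"
    rng = random.Random(seed)
    order = list(range(num_tokens))
    rng.shuffle(order)  # one shared permutation per image
    views = []
    for v in range(num_views):
        visible = set(order[v * n_visible:(v + 1) * n_visible])
        views.append((visible, set(range(num_tokens)) - visible))
    return views

# a 14x14 ViT token grid, 75% masking, four disjoint views
views = disjoint_masked_views(num_tokens=196, mask_ratio=0.75, num_views=4)
```

With these settings the four 49-token visible sets partition the grid, so every token is seen exactly once and masked three times across the views.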
The booming development and huge market of micro-videos bring new e-commerce channels for merchants. Currently, many micro-video publishers prefer to embed relevant ads into their micro-videos, which not only provides them with business income but also helps audiences discover interesting products. However, because micro-videos are recorded with unprofessional equipment, cover various topics, and involve multiple modalities, it is challenging to locate the products related to micro-videos efficiently, appropriately, and accurately. We formulate the micro-video-product retrieval task, the first attempt to explore retrieval where both the queries and the targets are multi-modal instances. A novel Multi-Queue Momentum Contrast (MQMC) network is proposed for bidirectional retrieval, consisting of uni-modal feature learning and multi-modal instance representation learning. Moreover, a discriminative selection strategy with a multi-queue is used to distinguish the importance of different negatives based on their categories. We collect two large-scale micro-video-product datasets (MVS and MVS-large) for evaluation and manually construct a hierarchical category ontology, which covers sundry products in daily life. Extensive experiments show that MQMC outperforms the state-of-the-art baselines. Our replication package (including code, dataset, etc.) is publicly available at https://github.com/duyali2000/MQMC.
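A toy sketch of the multi-queue idea, under our own assumptions about the weighting scheme (the paper's exact formulation may differ): one fixed-size FIFO queue of negative embeddings per product category, with negatives later weighted by their category relation to the anchor:

```python
from collections import deque

class MultiQueue:
    """Hypothetical sketch of a per-category negative store: one
    fixed-size FIFO queue of embeddings per category, so negatives
    can be weighted by their category relation to the anchor."""

    def __init__(self, queue_size):
        self.queue_size = queue_size
        self.queues = {}

    def enqueue(self, category, embedding):
        q = self.queues.setdefault(category, deque(maxlen=self.queue_size))
        q.append(embedding)  # oldest embedding drops out automatically

    def negatives(self, anchor_category, same_cat_weight=0.5, other_weight=1.0):
        # assumed weighting rule: down-weight same-category negatives
        out = []
        for cat, q in self.queues.items():
            w = same_cat_weight if cat == anchor_category else other_weight
            out.extend((w, e) for e in q)
        return out

mq = MultiQueue(queue_size=2)
for cat, emb in [("shoes", [0.1]), ("shoes", [0.2]), ("shoes", [0.3]), ("bags", [0.4])]:
    mq.enqueue(cat, emb)
negs = mq.negatives("shoes")  # weighted (weight, embedding) pairs
```

The `deque(maxlen=...)` gives the momentum-contrast-style dictionary its fixed size per category; the weight values here are illustrative placeholders.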
Neuroimaging-based prediction methods for intelligence and cognitive abilities have developed rapidly in the literature. Among different neuroimaging modalities, prediction based on functional connectivity (FC) has shown great promise. Most literature has focused on prediction using static FC, but there are limited investigations on the merits of such analysis compared to prediction based on dynamic FC or region-level functional magnetic resonance imaging (fMRI) time series that encode temporal variability. To account for the temporal dynamics in fMRI data, we propose a deep neural network based on bi-directional long short-term memory (bi-LSTM) that also incorporates a feature selection mechanism. The proposed pipeline is implemented via an efficient GPU computation framework and applied to predict intelligence scores based on region-level fMRI time series as well as dynamic FC. We compare the prediction performance for different intelligence measures based on static FC, dynamic FC, and region-level time series acquired from the Adolescent Brain Cognitive Development (ABCD) study involving close to 7000 individuals. Our detailed analysis illustrates that static FC consistently has inferior prediction performance compared to region-level time series or dynamic FC for unimodal rest and task fMRI experiments, and in almost all cases when using a combination of task and rest features. In addition, the proposed bi-LSTM pipeline based on region-level time series identifies several shared and differential important brain regions across task and rest fMRI experiments that drive intelligence prediction. A test-retest analysis of the selected features shows strong reliability across cross-validation folds. Given the large sample size of the ABCD study, our results provide strong evidence that superior prediction of intelligence can be achieved by accounting for temporal variations in fMRI.
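The distinction the abstract draws between static and dynamic FC can be made concrete with a small sliding-window correlation sketch (the standard construction, not the authors' exact pipeline): static FC is one Pearson correlation over the whole scan, while dynamic FC is a sequence of windowed correlations:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sx * sy)

def dynamic_fc(ts_a, ts_b, window, stride=1):
    """Dynamic FC between two regions: correlation in sliding windows,
    capturing the temporal variability that a single static value hides."""
    return [pearson(ts_a[i:i + window], ts_b[i:i + window])
            for i in range(0, len(ts_a) - window + 1, stride)]

a = [0, 1, 2, 3, 2, 1, 0, 1, 2, 3]
b = [0, 1, 2, 3, -2, -1, 0, -1, -2, -3]  # coupling flips sign mid-scan
fc = dynamic_fc(a, b, window=4)
static = pearson(a, b)  # one number; averages the sign flip away
```

In this toy example the windowed series runs from +1 to -1 while the static value sits in between, which is exactly the temporal information the bi-LSTM pipeline is designed to exploit.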
The ionization edges encoded in electron energy loss spectroscopy (EELS) spectra enable advanced materials analysis, including compositional analysis and elemental quantification. The development of parallel EELS instruments and fast, sensitive detectors has greatly improved the acquisition speed of EELS spectra. However, the traditional way of identifying core-loss edges is based on experience and manual work, which limits the processing speed. So far, the low signal-to-noise ratio and low jump ratio of core-loss edges in raw EELS spectra have made it challenging to automate edge identification. In this work, a convolutional bidirectional long short-term memory neural network (CNN-BiLSTM) is proposed to automate the detection of core-loss edges and the identification of elements from raw spectra. An EELS spectral database is synthesized using our forward model to assist the training and validation of the neural network. To make the synthesized spectra resemble real spectra, we collected a large library of experimentally acquired EELS core edges. In the synthetic training library, edges are modeled by fitting multi-Gaussian models to the real edges from experiments, and noise and instrumental imperfections are simulated and added. The trained CNN-BiLSTM network is tested against both simulated spectra and real spectra collected from experiments. The high accuracy of the network, 94.9%, demonstrates that, without complex preprocessing of the raw spectra, the proposed CNN-BiLSTM network can automate the identification of core-loss edges in EELS spectra with high accuracy.
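The synthesis procedure described here (multi-Gaussian edges over a background, plus simulated noise) can be sketched as a toy forward model; all parameter values below are illustrative, not taken from the paper:

```python
import math
import random

def gaussian(x, amp, mu, sigma):
    return amp * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def synth_spectrum(energy_axis, edge_components, background=(1e3, 3.0),
                   noise=0.02, seed=0):
    """Toy forward model in the spirit of the paper's database synthesis:
    power-law background + a multi-Gaussian core-loss edge + noise.
    Background and noise parameters here are placeholders."""
    rng = random.Random(seed)
    a, r = background
    spec = []
    for e in energy_axis:
        y = a * e ** (-r)                                    # power-law background
        y += sum(gaussian(e, *g) for g in edge_components)   # multi-Gaussian edge
        y += rng.gauss(0.0, noise * max(y, 1e-9))            # signal-scaled noise
        spec.append(y)
    return spec

energies = [200 + i for i in range(200)]         # energy axis in eV
edge = [(5.0, 284.0, 2.0), (3.0, 292.0, 4.0)]    # e.g. a carbon K-edge shape
spectrum = synth_spectrum(energies, edge)
```

Pairs of (spectrum, element label) generated this way would form the training examples; the real pipeline additionally fits the Gaussians to experimentally measured edges.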
A wide variety of efficiency methods exist for natural language processing (NLP) tasks, such as pruning, distillation, dynamic inference, quantization, etc. We can view an efficiency method as an operator applied to a model. Naturally, we can build a pipeline of multiple efficiency methods, i.e., apply multiple operators to the model sequentially. In this paper, we study the plausibility of this idea and, more importantly, the commutativity and cumulativeness of efficiency operators. We make two interesting observations: (1) efficiency operators are commutative: the order of efficiency methods within a pipeline has little impact on the final result; (2) efficiency operators are also cumulative: the final result of combining several efficiency methods can be estimated by combining the results of the individual methods. These observations deepen our understanding of efficiency operators and provide useful guidelines for their real-world applications.
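The operator view can be illustrated on a toy "model" (a weight list) with two hypothetical operators, magnitude pruning and uniform quantization; on this small example the two orderings agree exactly, while the paper's commutativity claim is of course only approximate for real models:

```python
def prune(weights, ratio=0.5):
    """Magnitude pruning operator: zero out the smallest `ratio` of weights."""
    k = int(len(weights) * ratio)
    thresh = sorted(abs(w) for w in weights)[k - 1] if k else float("-inf")
    return [0.0 if abs(w) <= thresh else w for w in weights]

def quantize(weights, step=0.25):
    """Uniform quantization operator: snap weights to a fixed grid."""
    return [round(w / step) * step for w in weights]

w = [0.9, -0.1, 0.05, 1.4, -0.8, 0.02, 0.6, -0.3]
pq = quantize(prune(w))   # pipeline: prune -> quantize
qp = prune(quantize(w))   # pipeline: quantize -> prune
```

Here `pq == qp`, a toy instance of commutativity; both pipelines are also "cumulative" in that the result is sparse (from pruning) and grid-aligned (from quantization) at once.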
Scene classification has been established as a challenging research problem. Compared with images of individual objects, scene images can be semantically more complex and abstract. Their difference mainly lies in the level of granularity of recognition. Nevertheless, object recognition is a key pillar of good scene recognition performance, since knowledge obtained from object images can be used to recognize scenes accurately. Existing scene recognition methods only consider the category label of a scene. However, we find that contextual information containing detailed local descriptions also helps scene recognition models become more discriminative. In this paper, we aim to improve scene recognition using the attribute and category label information encoded in objects. Based on the complementarity of attributes and category labels, we propose a multi-task attribute-scene recognition (MASR) network that learns a category embedding while predicting scene attributes. Attribute acquisition and object annotation are tedious and time-consuming tasks. We tackle this problem by proposing a partially supervised annotation strategy in which human intervention is greatly reduced. The strategy provides a more cost-effective solution for real-world scenarios and requires less annotation effort. Furthermore, we re-weight attribute predictions according to the importance levels indicated by object detection scores. Using the proposed method, we efficiently annotate attribute labels for four large-scale datasets and systematically investigate how scene and attribute recognition benefit each other. Experimental results demonstrate the effectiveness of our approach compared with state-of-the-art methods.
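The detection-score re-weighting could look like the following sketch; the aggregation rule (a confidence-weighted average of per-object attribute predictions) is our assumption, not the paper's formula:

```python
def reweight_attributes(object_preds, det_scores):
    """Hypothetical re-weighting: scene-level attribute scores as a
    detection-confidence-weighted average of per-object predictions,
    so confident detections dominate the attribute estimate."""
    assert len(object_preds) == len(det_scores)
    total = sum(det_scores)
    num_attrs = len(object_preds[0])
    scene = [0.0] * num_attrs
    for preds, s in zip(object_preds, det_scores):
        for j, p in enumerate(preds):
            scene[j] += (s / total) * p
    return scene

# two detected objects, three attributes (e.g. "wooden", "open", "cluttered")
preds = [[0.9, 0.1, 0.4], [0.2, 0.8, 0.4]]
scores = [0.75, 0.25]   # confident vs. uncertain detection
scene_attrs = reweight_attributes(preds, scores)
```

The confident detection pulls the scene-level scores toward its own predictions, which is the intended effect of using detection scores as importance levels.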
While research on restoring degraded images of regular sizes has made remarkable progress, restoring ultra-high-resolution (e.g., 4K) images remains an extremely challenging task due to the explosive growth of computational complexity and memory usage, as well as the lack of annotated data. In this paper, we propose a novel model for ultra-high-resolution image restoration, called the Global-Local Stepwise Generative Network (GLSGN), which adopts a stepwise restoration strategy involving four restoration pathways: three local pathways and one global pathway. The local pathways focus on image restoration at the fine granularity of local but high-resolution image patches, while the global pathway performs restoration on a scale-reduced but complete image to provide the local pathways with cues in a global view, including semantics and noise patterns. To smooth the mutual collaboration among the four pathways, our GLSGN is designed to ensure cross-pathway consistency in four aspects: low-level content, perceptual attention, restoration intensity, and high-level semantics. As another major contribution of this work, we also introduce the first ultra-high-resolution datasets to date for reflection removal and rain-streak removal, comprising 4,670 real-world and synthetic images. Extensive experiments on three typical image restoration tasks, namely image reflection removal, image rain-streak removal, and image dehazing, demonstrate that our GLSGN consistently outperforms state-of-the-art methods.
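The inputs to the local and global pathways can be sketched as plain patch tiling plus average-pool downscaling (illustrative only; GLSGN's actual pathways are learned networks operating on these two kinds of views):

```python
def split_patches(img, patch):
    """Local-pathway inputs: tile the full-resolution image into
    non-overlapping square patches (img is a 2D list of pixels)."""
    h, w = len(img), len(img[0])
    return [[row[x:x + patch] for row in img[y:y + patch]]
            for y in range(0, h, patch) for x in range(0, w, patch)]

def downscale(img, factor):
    """Global-pathway input: average-pool the complete image by `factor`,
    trading resolution for a full-image view."""
    h, w = len(img), len(img[0])
    return [[sum(img[y + dy][x + dx] for dy in range(factor)
                 for dx in range(factor)) / factor ** 2
             for x in range(0, w, factor)]
            for y in range(0, h, factor)]

img = [[float(x + y) for x in range(8)] for y in range(8)]
patches = split_patches(img, patch=4)   # four full-resolution local crops
small = downscale(img, factor=4)        # one 2x2 scale-reduced global view
```

The global view is tiny but covers the whole image, which is why it can supply semantics and noise-pattern cues that individual high-resolution patches lack.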
Sketch-based 3D shape retrieval (SBSR) is an important yet difficult task, which has attracted increasing attention in recent years. Existing approaches address the problem in restricted settings without appropriately simulating real application scenarios. To mimic realistic settings, in this track we adopt large-scale sketches drawn by amateurs with different levels of drawing skill, together with a variety of 3D shapes, including not only CAD models but also models scanned from real objects. We define two SBSR tasks and construct two benchmarks comprising more than 46,000 CAD models, 1,700 realistic models, and 145,000 sketches. Four teams participated in this track and submitted 15 runs for the two tasks, evaluated by 7 commonly used metrics. We hope that the benchmarks, the comparative results, and the open-source evaluation code will foster future research in the 3D object retrieval community.
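One of the commonly used retrieval metrics, mean average precision, can be sketched as follows (the standard definition, not this track's evaluation code):

```python
def average_precision(ranked_labels, query_label):
    """AP for one query: ranked_labels are the class labels of the
    retrieved 3D shapes, best match first."""
    hits, precisions = 0, []
    for i, lab in enumerate(ranked_labels, start=1):
        if lab == query_label:
            hits += 1
            precisions.append(hits / i)   # precision at each relevant rank
    return sum(precisions) / hits if hits else 0.0

def mean_ap(rankings, query_labels):
    """mAP: average AP over all sketch queries."""
    return sum(average_precision(r, q)
               for r, q in zip(rankings, query_labels)) / len(rankings)

ranked = ["chair", "table", "chair", "lamp"]
ap = average_precision(ranked, "chair")            # (1/1 + 2/3) / 2
map_score = mean_ap([ranked, ["lamp", "lamp", "chair"]], ["chair", "lamp"])
```

Tracks like this typically report mAP alongside nearest-neighbor precision, first/second tier, E-measure, and DCG, all computed from the same ranked lists.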
Standard plane (SP) localization is essential for routine clinical ultrasound (US) diagnosis. Compared with 2D US, 3D US can acquire multiple view planes in one scan and provide complete anatomy with the addition of coronal planes. However, manually navigating SPs in 3D US is laborious and biased due to the orientation variability and the huge search space. In this study, we introduce a novel reinforcement learning (RL) framework for automatic SP localization in 3D US. Our contribution is threefold. First, we formulate SP localization in 3D US as a tangent-point-based problem in RL to restructure the action space and significantly reduce the search space. Second, we design an auxiliary-task learning strategy to enhance the model's ability to recognize the subtle differences between non-SPs and SPs during plane search. Finally, we propose a spatial-anatomical reward that exploits spatial and anatomical information simultaneously to effectively guide the learning trajectories. We explore the efficacy of our approach in localizing four SPs on uterus and fetal brain datasets. Experiments show that our approach achieves a high localization accuracy as well as robust performance.
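A minimal sketch of why a tangent-point parameterization shrinks the search space: a single 3D point determines both the plane's orientation and its offset. The reward below is a bare distance surrogate; both definitions are our guesses at the flavor of the formulation, not the paper's exact equations:

```python
import math

def tangent_plane(point):
    """Tangent-point parameterization: the plane through `point` with
    normal n = point/|point| (tangent to the sphere through the point).
    Plane equation: n . x = d. One 3-vector encodes orientation + offset."""
    norm = math.sqrt(sum(c * c for c in point))
    n = [c / norm for c in point]
    return n, norm

def distance_reward(pred_point, target_point):
    """Assumed dense reward: negative Euclidean distance between tangent
    points (the paper's spatial-anatomical reward is richer than this)."""
    return -math.dist(pred_point, target_point)

n, d = tangent_plane([0.0, 0.0, 2.0])   # axial plane at depth 2
r_close = distance_reward([0.0, 0.0, 2.1], [0.0, 0.0, 2.0])
r_far = distance_reward([1.0, 1.0, 0.0], [0.0, 0.0, 2.0])
```

An RL agent stepping the tangent point toward the target thus receives a strictly larger reward as the predicted plane approaches the SP.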
The success of deep neural networks greatly relies on the availability of large amounts of high-quality annotated data, which are difficult or expensive to obtain. The resulting labels may be class-imbalanced, noisy, or biased by human annotators. Learning unbiased classification models from imperfectly annotated datasets is challenging, and we usually suffer from overfitting or underfitting. In this work, we thoroughly investigate the popular softmax loss and margin-based losses, and offer a feasible approach to tighten the generalization error bound by maximizing the minimal sample margin. We further derive the optimality condition for this purpose, which indicates how the class prototypes should be anchored. Motivated by the theoretical analysis, we propose a simple yet effective method, namely prototype-anchored learning (PAL), which can be easily incorporated into various learning-based classification schemes to handle imperfect annotations. We verify the effectiveness of PAL on class-imbalanced learning and noise-tolerant learning through extensive experiments on synthetic and real-world datasets.
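The minimal sample margin that the abstract proposes to maximize can be computed as follows (the standard multi-class margin definition; the paper's bound and prototype-anchoring condition carry more structure than this sketch):

```python
def sample_margin(scores, label):
    """Margin of one sample: true-class score minus the best rival score.
    Positive means correctly classified with room to spare."""
    rival = max(s for i, s in enumerate(scores) if i != label)
    return scores[label] - rival

def min_margin(batch_scores, labels):
    """The quantity maximized in the approach described above: the
    minimum margin over samples, which controls the generalization bound."""
    return min(sample_margin(s, y) for s, y in zip(batch_scores, labels))

scores = [[2.0, 0.5, -1.0],   # confident sample, margin 1.5
          [1.0, 0.8, 0.3]]    # borderline sample, margin 0.2
m = min_margin(scores, labels=[0, 0])
```

Because the minimum is dominated by the worst sample, pushing it up forces every sample away from the decision boundary, which is the intuition behind anchoring class prototypes to enlarge the smallest margin.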